Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
arXiv: 1803.07517
20 March 2018
Gabrielle Ras, Marcel van Gerven, W. Haselager
Topics: XAI

Papers citing "Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges"

21 / 21 papers shown
What Do People Want to Know About Artificial Intelligence (AI)? The Importance of Answering End-User Questions to Explain Autonomous Vehicle (AV) Decisions
Somayeh Molaei, Lionel P. Robert, Nikola Banovic
Citations: 0 | 09 May 2025

Policy-to-Language: Train LLMs to Explain Decisions with Flow-Matching Generated Rewards
Xinyi Yang, Liang Zeng, Heng Dong, C. Yu, X. Wu, H. Yang, Yu Wang, Milind Tambe, Tonghan Wang
Citations: 2 | 18 Feb 2025

Mapping the Potential of Explainable AI for Fairness Along the AI Lifecycle
Luca Deck, Astrid Schomacker, Timo Speith, Jakob Schöffer, Lena Kästner, Niklas Kühl
Citations: 4 | 29 Apr 2024

Less is More: The Influence of Pruning on the Explainability of CNNs
David Weber, F. Merkle, Pascal Schöttle, Stephan Schlögl, Martin Nocker
Topics: FAtt | Citations: 1 | 17 Feb 2023

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
Topics: XAI, FAtt | Citations: 18 | 10 Nov 2022

A Fast Attention Network for Joint Intent Detection and Slot Filling on Edge Devices
Liang Huang, Senjie Liang, Feiyang Ye, Nan Gao
Citations: 3 | 16 May 2022

The Need for Ethical, Responsible, and Trustworthy Artificial Intelligence for Environmental Sciences
A. McGovern, I. Ebert-Uphoff, D. Gagne, A. Bostrom
Citations: 64 | 15 Dec 2021

Model Doctor: A Simple Gradient Aggregation Strategy for Diagnosing and Treating CNN Classifiers
Zunlei Feng, Jiacong Hu, Sai Wu, Xiaotian Yu, Jie Song, Mingli Song
Citations: 13 | 09 Dec 2021

Explainable AI (XAI): A Systematic Meta-Survey of Current Challenges and Future Opportunities
Waddah Saeed, C. Omlin
Topics: XAI | Citations: 414 | 11 Nov 2021

What Can Knowledge Bring to Machine Learning? -- A Survey of Low-shot Learning for Structured Data
Yang Hu, Adriane P. Chapman, Guihua Wen, Dame Wendy Hall
Citations: 24 | 11 Jun 2021

A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi, Rahee Walambe, K. Kotecha
Citations: 137 | 17 May 2021

Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity
Mathias Parisot, Balázs Pejó, Dayana Spagnuelo
Topics: MIACV | Citations: 33 | 27 Apr 2021

Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan
Citations: 126 | 24 Jan 2021

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
Topics: AAML, XAI | Citations: 370 | 30 Apr 2020

From Data to Actions in Intelligent Transportation Systems: a Prescription of Functional Requirements for Model Actionability
I. Laña, J. S. Medina, E. Vlahogianni, Javier Del Ser
Citations: 51 | 06 Feb 2020

Transparency and Trust in Human-AI-Interaction: The Role of Model-Agnostic Explanations in Computer Vision-Based Decision Support
Christian Meske, Enrico Bunde
Citations: 7 | 04 Feb 2020

CheXplain: Enabling Physicians to Explore and Understand Data-Driven, AI-Enabled Medical Imaging Analysis
Yao Xie, Melody Chen, David Kao, Ge Gao, Xiang 'Anthony' Chen
Citations: 125 | 15 Jan 2020

Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Q. V. Liao, D. Gruen, Sarah Miller
Citations: 701 | 08 Jan 2020

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, S. Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
Topics: XAI | Citations: 6,106 | 22 Oct 2019

Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems
Richard J. Tomsett, Dave Braines, Daniel Harborne, Alun D. Preece, Supriyo Chakraborty
Topics: FaML | Citations: 164 | 20 Jun 2018

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
Topics: FaML | Citations: 2,233 | 24 Jun 2017