Explainability of deep vision-based autonomous driving systems: Review and challenges

13 January 2021
Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord
XAI

Papers citing "Explainability of deep vision-based autonomous driving systems: Review and challenges"

38 / 88 papers shown
Behavioral Intention Prediction in Driving Scenes: A Survey
Jianwu Fang, Fan Wang, Jianru Xue, Tat-Seng Chua
01 Nov 2022

LiDAR-as-Camera for End-to-End Driving
Ardi Tampuu, Romet Aidla, J. Gent, Tambet Matiisen
30 Jun 2022

Trajectory-guided Control Prediction for End-to-end Autonomous Driving: A Simple yet Strong Baseline
Peng Wu, Xiaosong Jia, Li Chen, Junchi Yan, Hongyang Li, Yu Qiao
16 Jun 2022

Slim-neck by GSConv: A lightweight-design for real-time detector architectures
Hulin Li, Jun Li, Hanbing Wei, Zheng Liu, Zhenfei Zhan, Qiliang Ren
06 Jun 2022

Ergo, SMIRK is Safe: A Safety Case for a Machine Learning Component in a Pedestrian Automatic Emergency Brake System
Markus Borg, Jens Henriksson, Kasper Socha, Olof Lennartsson, Elias Sonnsjo Lonegren, T. Bui, Piotr Tomaszewski, S. Sathyamoorthy, Sebastian Brink, M. H. Moghadam
16 Apr 2022

Is my Driver Observation Model Overconfident? Input-guided Calibration Networks for Reliable and Interpretable Confidence Estimates
Alina Roitberg, Kunyu Peng, David Schneider, Kailun Yang, Marios Koulakis, Manuel Martínez, Rainer Stiefelhagen
10 Apr 2022 · UQCV

Investigation of Factorized Optical Flows as Mid-Level Representations
Hsuan-Kung Yang, Tsu-Ching Hsiao, Tingbo Liao, Hsu-Shen Liu, Li-Yuan Tsao, Tzu-Wen Wang, Shan Yang, Yu-Wen Chen, Huang-ru Liao, Chun-Yi Lee
09 Mar 2022
Attacks and Faults Injection in Self-Driving Agents on the Carla Simulator -- Experience Report
Niccolò Piazzesi, Massimo Hong, Andrea Ceccarelli
25 Feb 2022 · AAML

Explainable Artificial Intelligence for Autonomous Driving: A Comprehensive Overview and Field Guide for Future Research Directions
Shahin Atakishiyev, Mohammad Salameh, Hengshuai Yao, Randy Goebel
21 Dec 2021

What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods
Julien Colin, Thomas Fel, Rémi Cadène, Thomas Serre
06 Dec 2021

PSI: A Pedestrian Behavior Dataset for Socially Intelligent Autonomous Car
Tina Chen, Taotao Jing, Renran Tian, Yaobin Chen, Joshua E. Domeyer, Heishiro Toyoda, Rini Sherony, Zhengming Ding
05 Dec 2021

Towards Safe, Explainable, and Regulated Autonomous Driving
Shahin Atakishiyev, Mohammad Salameh, Hengshuai Yao, Randy Goebel
20 Nov 2021

STEEX: Steering Counterfactual Explanations with Semantics
P. Jacob, Éloi Zablocki, H. Ben-younes, Mickaël Chen, P. Pérez, Matthieu Cord
17 Nov 2021

Yaw-Guided Imitation Learning for Autonomous Driving in Urban Environments
Yandong Liu, Chengzhong Xu, Hui Kong
11 Nov 2021
A Versatile and Efficient Reinforcement Learning Framework for Autonomous Driving
Guan-Bo Wang, Haoyi Niu, Desheng Zhu, Jianming Hu, Xianyuan Zhan, Guyue Zhou
22 Oct 2021 · OffRL

Trustworthy AI: From Principles to Practices
Bo-wen Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou
04 Oct 2021

Raising context awareness in motion forecasting
H. Ben-younes, Éloi Zablocki, Mickaël Chen, P. Pérez, Matthieu Cord
16 Sep 2021 · TTA

Attention-like feature explanation for tabular data
A. Konstantinov, Lev V. Utkin
10 Aug 2021 · FAtt

Towards explainable artificial intelligence (XAI) for early anticipation of traffic accidents
Muhammad Monjurul Karim, Yu Li, Ruwen Qin
31 Jul 2021

An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data
Lev V. Utkin, A. Konstantinov, Kirill Vishniakov
16 Jun 2021 · FAtt

A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi, Rahee Walambe, K. Kotecha
17 May 2021
SurvNAM: The machine learning survival model explanation
Lev V. Utkin, Egor D. Satyukov, A. Konstantinov
18 Apr 2021 · AAML, FAtt

Near-field Perception for Low-Speed Vehicle Automation using Surround-view Fisheye Cameras
Ciarán Eising, Jonathan Horgan, S. Yogamani
31 Mar 2021

Explanations in Autonomous Driving: A Survey
Daniel Omeiza, Helena Webb, Marina Jirotka, Lars Kunze
09 Mar 2021

Ensembles of Random SHAPs
Lev V. Utkin, A. Konstantinov
04 Mar 2021 · FAtt

Driving Behavior Explanation with Multi-level Fusion
H. Ben-younes, Éloi Zablocki, Patrick Pérez, Matthieu Cord
09 Dec 2020

Lightweight Generative Adversarial Networks for Text-Guided Image Manipulation
Bowen Li, Xiaojuan Qi, Philip H. S. Torr, Thomas Lukasiewicz
23 Oct 2020 · GAN

Who Make Drivers Stop? Towards Driver-centric Risk Assessment: Risk Object Identification via Causal Inference
Chengxi Li, Stanley H. Chan, Yi-Ting Chen
05 Mar 2020 · CML
Deep Reinforcement Learning for Autonomous Driving: A Survey
B. R. Kiran, Ibrahim Sobh, V. Talpaert, Patrick Mannion, A. A. Sallab, S. Yogamani, P. Pérez
02 Feb 2020

Interpretable Self-Attention Temporal Reasoning for Driving Behavior Understanding
Yi-Chieh Liu, Yung-An Hsieh, Min-Hung Chen, Chao-Han Huck Yang, Jesper N. Tegnér, Y. Tsai
06 Nov 2019

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
23 Aug 2019 · SyDa, FaML

A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras, S. Laine, Timo Aila
12 Dec 2018

e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktaschel, Thomas Lukasiewicz, Phil Blunsom
04 Dec 2018 · LRM
Deep Ordinal Regression Network for Monocular Depth Estimation
Huan Fu, Mingming Gong, Chaohui Wang, Kayhan Batmanghelich, Dacheng Tao
06 Jun 2018 · MDE

Learning to Anonymize Faces for Privacy Preserving Action Detection
Zhongzheng Ren, Yong Jae Lee, Michael S. Ryoo
30 Mar 2018 · CVBM, PICV

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
28 Feb 2017 · XAI, FaML

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer
03 Feb 2017 · AAML

Safety Verification of Deep Neural Networks
Xiaowei Huang, M. Kwiatkowska, Sen Wang, Min Wu
21 Oct 2016 · AAML