Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition

5 May 2020
Mahsan Nourani, Chiradeep Roy, Tahrima Rahman, Eric D. Ragan, Nicholas Ruozzi, Vibhav Gogate

Papers citing "Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition"

7 papers
TV-TREES: Multimodal Entailment Trees for Neuro-Symbolic Video Reasoning
Kate Sanders, Nathaniel Weir, Benjamin Van Durme
29 Feb 2024
Why is plausibility surprisingly problematic as an XAI criterion?
Weina Jin, Xiaoxiao Li, Ghassan Hamarneh
30 Mar 2023
Silent Vulnerable Dependency Alert Prediction with Vulnerability Key Aspect Explanation
International Conference on Software Engineering (ICSE), 2023
Jiamou Sun, Zhenchang Xing, Qinghua Lu, Xiwei Xu, Liming Zhu, Thong Hoang, Dehai Zhao
15 Feb 2023
Temporal Relevance Analysis for Video Action Models
Quanfu Fan, Donghyun Kim, Chun-Fu Chen, Stan Sclaroff, Kate Saenko, Sarah Adel Bargal
25 Apr 2022
Explainable Activity Recognition for Smart Home Systems
Devleena Das, Yasutaka Nishimura, R. Vivek, Naoto Takeda, Sean T. Fish, Thomas Ploetz, Sonia Chernova
20 May 2021
Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy
AAAI Conference on Human Computation & Crowdsourcing (HCOMP), 2020
Donald R. Honeycutt, Mahsan Nourani, Eric D. Ragan
28 Aug 2020
The Role of Domain Expertise in User Trust and the Impact of First Impressions with Intelligent Systems
Mahsan Nourani, J. King, Eric D. Ragan
20 Aug 2020