ResearchTrend.AI

arXiv:2303.16292 · Cited By
XAIR: A Framework of Explainable AI in Augmented Reality

28 March 2023
Xuhai Xu, Anna Yu, Tanya R. Jonker, Kashyap Todi, Feiyu Lu, Xun Qian, João Marcelo Evangelista Belo, Tianyi Wang, Michelle Li, Aran Mun, Te-Yen Wu, Junxiao Shen, Ting Zhang, Narine Kokhlikyan, Fulton Wang, P. Sorenson, Sophie Kahyun Kim, Hrvoje Benko

Papers citing "XAIR: A Framework of Explainable AI in Augmented Reality" (18 papers)
Less or More: Towards Glanceable Explanations for LLM Recommendations Using Ultra-Small Devices
Xinru Wang, Mengjie Yu, Hannah Nguyen, Michael Iuzzolino, Tianyi Wang, ..., Ting Zhang, Naveen Sendhilnathan, Hrvoje Benko, Haijun Xia, Tanya R. Jonker
26 Feb 2025

AutoMR: A Universal Time Series Motion Recognition Pipeline
Likun Zhang, Sicheng Yang, Z. Wang, Haining Liang, Junxiao Shen
24 Feb 2025

An Interaction Design Toolkit for Physical Task Guidance with Artificial Intelligence and Mixed Reality
Arthur Caetano, Alejandro Aponte, Misha Sra
22 Dec 2024

TOM: A Development Platform For Wearable Intelligent Assistants
Nuwan Janaka, Shengdong Zhao, David Hsu, Sherisse Tan Jing Wen, Chun-Keat Koh
22 Jul 2024

Demonstrating PilotAR: A Tool to Assist Wizard-of-Oz Pilot Studies with OHMD
Nuwan Janaka, Runze Cai, Shengdong Zhao, David Hsu
17 Jul 2024

PARSE-Ego4D: Personal Action Recommendation Suggestions for Egocentric Videos
Steven Abreu, Tiffany D. Do, Karan Ahuja, Eric J. Gonzalez, Lee Payne, Daniel J. McDuff, Mar González-Franco
14 Jun 2024

Implicit gaze research for XR systems
Naveen Sendhilnathan, Ajoy S. Fernandes, Michael J. Proulx, Tanya R. Jonker
22 May 2024

G-VOILA: Gaze-Facilitated Information Querying in Daily Scenarios
Zeyu Wang, Yuanchun Shi, Yuntao Wang, Yuchen Yao, Kun Yan, Yuhan Wang, Lei Ji, Xuhai Xu, Chun Yu
13 May 2024

Explainable Interface for Human-Autonomy Teaming: A Survey
Xiangqi Kong, Yang Xing, Antonios Tsourdos, Ziyue Wang, Weisi Guo, Adolfo Perrusquía, Andreas Wikander
04 May 2024

Exploring Algorithmic Explainability: Generating Explainable AI Insights for Personalized Clinical Decision Support Focused on Cannabis Intoxication in Young Adults
Tongze Zhang, Tammy Chung, Anind Dey, Sang Won Bae
22 Apr 2024

Explainable Interfaces for Rapid Gaze-Based Interactions in Mixed Reality
Mengjie Yu, Dustin Harris, Ian Jones, Ting Zhang, Yue Liu, ..., Krista E. Taylor, Zhenhong Hu, Mary A. Hood, Hrvoje Benko, Tanya R. Jonker
21 Apr 2024

GazePointAR: A Context-Aware Multimodal Voice Assistant for Pronoun Disambiguation in Wearable Augmented Reality
Jaewook Lee, Jun Wang, Elizabeth Brown, Liam Chu, Sebastian S. Rodriguez, Jon E. Froehlich
12 Apr 2024

Designing for Human Operations on the Moon: Challenges and Opportunities of Navigational HUD Interfaces
Leonie Bensch, Tommy Nilsson, Jan C. Wulkop, Paul Demedeiros, N. Herzberger, ..., A. Gerndt, Frank Flemisch, Florian Dufresne, Georgia Albuquerque, Aidan Cowley
24 Feb 2024

LLMR: Real-time Prompting of Interactive Worlds using Large Language Models
Fernanda De La Torre, Cathy Mengying Fang, Han Huang, Andrzej Banburski-Fahey, Judith Amores Fernandez, Jaron Lanier
21 Sep 2023

Technical Understanding from IML Hands-on Experience: A Study through a Public Event for Science Museum Visitors
Wataru Kawabe, Yuri Nakao, Akihisa Shitara, Yusuke Sugano
10 May 2023

Ego4D: Around the World in 3,000 Hours of Egocentric Video
Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, ..., Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik
13 Oct 2021

What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum
15 Feb 2021

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
28 Feb 2017