Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act?
arXiv:2302.10766 · 21 February 2023
Balint Gyevnar, Nick Ferguson, Burkhard Schafer

Papers citing "Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act?" (9 papers shown)
1. Building Symbiotic AI: Reviewing the AI Act for a Human-Centred, Principle-Based Framework
   Miriana Calvano, Antonio Curci, Giuseppe Desolda, Andrea Esposito, R. Lanzilotti, Antonio Piccinno
   14 Jan 2025

2. How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law
   Benjamin Frész, Elena Dubovitskaya, Danilo Brajovic, Marco F. Huber, Christian Horz
   19 Apr 2024

3. People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior: Insights from Cognitive Science for Explainable AI
   Balint Gyevnar, Stephanie Droop, Tadeg Quillien, Shay B. Cohen, Neil R. Bramley, Christopher G. Lucas, Stefano V. Albrecht
   11 Mar 2024

4. Sora as an AGI World Model? A Complete Survey on Text-to-Video Generation
   Joseph Cho, Fachrina Dewi Puspitasari, Sheng Zheng, Jingyao Zheng, Lik-Hang Lee, Tae-Ho Kim, Choong Seon Hong, Chaoning Zhang
   Tags: EGVM, VGen
   08 Mar 2024

5. Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review
   Anton Kuznietsov, Balint Gyevnar, Cheng Wang, Steven Peters, Stefano V. Albrecht
   Tags: XAI
   08 Feb 2024

6. Causal Explanations for Sequential Decision-Making in Multi-Agent Systems
   Balint Gyevnar, Chenghe Wang, Christopher G. Lucas, Shay B. Cohen, Stefano V. Albrecht
   Tags: CML
   21 Feb 2023

7. A Human-Centric Assessment Framework for AI
   S. Saralajew, Ammar Shaker, Zhao Xu, Kiril Gashteovski, Bhushan Kotnis, Wiem Ben-Rim, Jürgen Quittek, Carolin (Haas) Lawrence
   25 May 2022

8. What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
   Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum
   Tags: XAI
   15 Feb 2021

9. e-SNLI: Natural Language Inference with Natural Language Explanations
   Oana-Maria Camburu, Tim Rocktaschel, Thomas Lukasiewicz, Phil Blunsom
   Tags: LRM
   04 Dec 2018