
arXiv:2201.05793
A Benchmark for Generalizable and Interpretable Temporal Question Answering over Knowledge Bases

15 January 2022
S. Neelam
Udit Sharma
Hima P. Karanam
S. Ikbal
Pavan Kapanipathi
Ibrahim Abdelaziz
Nandana Mihindukulasooriya
Young-Suk Lee
Santosh K. Srivastava
Cezar Pendus
Saswati Dana
Dinesh Garg
Achille Fokoue
G P Shrivatsa Bhargav
Dinesh Khandelwal
Srinivas Ravishankar
Sairam Gurajada
Maria Chang
Rosario A. Uceda-Sosa
Salim Roukos
Alexander G. Gray
Guilherme Lima
Ryan Riegel
F. Luus
L. V. Subramaniam
Abstract

Knowledge Base Question Answering (KBQA) tasks that involve complex reasoning are emerging as an important research direction. However, most existing KBQA datasets focus primarily on generic multi-hop reasoning over explicit facts, largely ignoring other reasoning types such as temporal, spatial, and taxonomic reasoning. In this paper, we present a benchmark dataset for temporal reasoning, TempQA-WD, to encourage research in extending the present approaches to target a more challenging set of complex reasoning tasks. Specifically, our benchmark is a temporal question answering dataset with the following advantages: (a) it is based on Wikidata, which is the most frequently curated, openly available knowledge base; (b) it includes intermediate SPARQL queries to facilitate the evaluation of semantic parsing based approaches for KBQA; and (c) it generalizes to multiple knowledge bases: Freebase and Wikidata. The TempQA-WD dataset is available at https://github.com/IBM/tempqa-wd.
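The abstract notes that each question comes paired with an intermediate SPARQL query, which is what makes the dataset usable for evaluating semantic parsing approaches. A minimal sketch of what such a (question, query) pair might look like over Wikidata; the specific question, entity ID, and query below are illustrative assumptions, not taken from TempQA-WD:

```python
# Hypothetical (question, SPARQL) pair of the kind a temporal KBQA
# dataset over Wikidata could contain. The IDs follow Wikidata
# conventions: Q11696 = "President of the United States",
# P39 = "position held", P580/P582 = start/end time qualifiers.
example = {
    "question": "Who was the President of the United States in 1961?",
    "sparql": """
        SELECT ?person WHERE {
          ?person p:P39 ?stmt .
          ?stmt ps:P39 wd:Q11696 ;
                pq:P580 ?start ;
                pq:P582 ?end .
          FILTER(?start <= "1961-06-01"^^xsd:dateTime &&
                 ?end   >= "1961-06-01"^^xsd:dateTime)
        }
    """,
}

# The temporal reasoning lives in the FILTER clause: answering the
# question requires constraining statement qualifiers (start/end
# time), not just matching a single explicit fact triple -- which is
# exactly the reasoning type the abstract says generic multi-hop
# datasets tend to ignore.
print(example["question"])
```

A semantic parser evaluated on such a benchmark would be scored both on the final answer and on how closely its generated query matches the gold intermediate query.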
