Quriosity: Analyzing Human Questioning Behavior and Causal Inquiry through Curiosity-Driven Queries

30 May 2024
Roberto Ceraolo
Dmitrii Kharlapenko
Ahmad Khan
Amélie Reymond
Rada Mihalcea
Mrinmaya Sachan
Bernhard Schölkopf
Zhijing Jin
Abstract

Recent progress in Large Language Model (LLM) technology has changed our role in interacting with these models. Instead of primarily testing these models with questions whose answers we already know, we now use them for queries where the answers are unknown to us, driven by human curiosity. This shift highlights the growing need to understand curiosity-driven human questions: those that are more complex, open-ended, and reflective of real-world needs. To this end, we present Quriosity, a collection of 13.5K naturally occurring questions from three diverse sources: human-to-search-engine queries, human-to-human interactions, and human-to-LLM conversations. Our comprehensive collection enables a rich understanding of human curiosity across various domains and contexts. Our analysis reveals a significant presence of causal questions (up to 42%) in the dataset, for which we develop an iterative prompt improvement framework to identify all causal queries and examine their unique linguistic properties, cognitive complexity, and source distribution. Our paper paves the way for future work on causal question identification and open-ended chatbot interactions.
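The iterative prompt-improvement idea described above can be illustrated with a minimal sketch. This is not the authors' code: the LLM classifier is mocked by a keyword heuristic, and the names `classify`, `refine_cues`, and `CAUSAL_CUES` are hypothetical stand-ins for an actual prompted model and its prompt-revision step.

```python
# Illustrative sketch only: classify questions as causal or not, collect
# misclassified causal questions, and "improve the prompt" by extending
# the cue list. A real system would call an LLM at both steps.

CAUSAL_CUES = ("why", "cause", "effect", "lead to", "result in", "because")

def classify(question: str, cues=CAUSAL_CUES) -> bool:
    """Stand-in for an LLM classifier: flag a question as causal
    if it contains any causal cue phrase."""
    q = question.lower()
    return any(cue in q for cue in cues)

def refine_cues(cues, errors):
    """Toy prompt-improvement step: add cue phrases that appear in
    causal questions the current classifier missed."""
    new_cues = set(cues)
    for question in errors:
        for phrase in ("how come", "reason for"):
            if phrase in question.lower():
                new_cues.add(phrase)
    return tuple(new_cues)

def iterate(labeled, cues=CAUSAL_CUES, rounds=3):
    """Repeat classify -> collect errors -> refine until no labeled
    causal question is missed (or the round budget runs out)."""
    for _ in range(rounds):
        errors = [q for q, is_causal in labeled
                  if is_causal and not classify(q, cues)]
        if not errors:
            break
        cues = refine_cues(cues, errors)
    return cues
```

The loop mirrors the framework's shape (evaluate, inspect errors, revise the instruction) without reproducing its actual prompts or model.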

@article{ceraolo2025_2405.20318,
  title={Quriosity: Analyzing Human Questioning Behavior and Causal Inquiry through Curiosity-Driven Queries},
  author={Roberto Ceraolo and Dmitrii Kharlapenko and Ahmad Khan and Amélie Reymond and Rada Mihalcea and Bernhard Schölkopf and Mrinmaya Sachan and Zhijing Jin},
  journal={arXiv preprint arXiv:2405.20318},
  year={2025}
}