ResearchTrend.AI
Imagery as Inquiry: Exploring A Multimodal Dataset for Conversational Recommendation

23 May 2024
Se-eun Yoon
Hyunsik Jeon
Julian McAuley
Abstract

We introduce a multimodal dataset in which users express preferences through images. These images span a broad spectrum of visual expression, from landscapes to artistic depictions. Users request recommendations for books or music that evoke feelings similar to those captured in the images, and recommendations are endorsed by the community through upvotes. The dataset supports two recommendation tasks: title generation and multiple-choice selection. Our experiments with large foundation models reveal their limitations on these tasks. In particular, vision-language models show no significant advantage over language-only counterparts given textual descriptions, which we hypothesize stems from underutilized visual capabilities. To better harness these capabilities, we propose chain-of-imagery prompting, which yields notable improvements. We release our code and datasets.
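The abstract does not detail how chain-of-imagery prompting works, but it describes a two-step idea: first surface the imagery and feelings a model associates with the input, then recommend items grounded in that description. A speculative sketch of such a two-stage pipeline is below; the function name `chain_of_imagery`, the prompt wording, and the `ask` callable (any text-in, text-out model interface) are all assumptions for illustration, not the authors' implementation.

```python
from typing import Callable

def chain_of_imagery(ask: Callable[[str], str],
                     image_description: str,
                     domain: str = "books") -> str:
    """Hypothetical two-stage prompting pipeline.

    Stage 1: elicit the imagery, mood, and feelings the input evokes.
    Stage 2: ask for recommendations grounded in that elicited imagery,
    rather than recommending directly from the raw input.
    """
    # Stage 1: have the model articulate the evoked imagery and feelings.
    imagery = ask(
        "Describe the scenery, mood, and feelings evoked by this image: "
        + image_description
    )
    # Stage 2: condition the recommendation request on that description.
    return ask(
        f"Given this imagery and mood: {imagery}\n"
        f"Recommend {domain} that evoke similar feelings."
    )
```

In practice `ask` would wrap a vision-language or language model API; the sketch only shows the control flow of conditioning the recommendation on an intermediate imagery description.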

@article{yoon2024_2405.14142,
  title={Imagery as Inquiry: Exploring A Multimodal Dataset for Conversational Recommendation},
  author={Se-eun Yoon and Hyunsik Jeon and Julian McAuley},
  journal={arXiv preprint arXiv:2405.14142},
  year={2024}
}