Can one size fit all?: Measuring Failure in Multi-Document Summarization Domain Transfer

20 March 2025
Alexandra DeLucia
Mark Dredze
arXiv · PDF · HTML
Abstract

Abstractive multi-document summarization (MDS) is the task of automatically summarizing information across multiple documents, from news articles to conversations with multiple speakers. The training approaches for current MDS models can be grouped into four categories: end-to-end with special pre-training ("direct"), chunk-then-summarize, extract-then-summarize, and inference with GPT-style models. In this work, we evaluate MDS models across training approaches, domains, and dimensions (reference similarity, quality, and factuality) to analyze how and why models trained on one domain can fail to summarize documents from another (News, Science, and Conversation) in the zero-shot domain transfer setting. We define domain-transfer "failure" as a decrease in factuality, higher deviation from the target, and a general decrease in summary quality. In addition to exploring domain transfer for MDS models, we examine potential issues with applying popular summarization metrics out-of-the-box.
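As a rough illustration of the chunk-then-summarize approach named in the abstract, the sketch below splits the concatenated input documents into chunks that fit a single-document summarizer's input window, summarizes each chunk, then summarizes the concatenation of the chunk summaries. The model choice (facebook/bart-large-cnn), chunk size, and generation lengths are illustrative assumptions, not settings from the paper.

# Minimal chunk-then-summarize sketch (illustrative; not the paper's setup).
from transformers import pipeline

# A single-document summarizer; the model choice here is a placeholder.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def chunk_then_summarize(documents, chunk_words=400):
    """Summarize multiple documents by summarizing fixed-size word chunks,
    then summarizing the concatenated chunk summaries."""
    words = " ".join(documents).split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    # First pass: one summary per chunk.
    partials = [summarizer(c, max_length=128, truncation=True)[0]["summary_text"]
                for c in chunks]
    # Second pass: fuse the chunk summaries into a final summary.
    return summarizer(" ".join(partials), max_length=200,
                      truncation=True)[0]["summary_text"]

# Example usage:
# print(chunk_then_summarize(["First article text ...", "Second article text ..."]))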

View on arXiv
@article{delucia2025_2503.15768,
  title={Can one size fit all?: Measuring Failure in Multi-Document Summarization Domain Transfer},
  author={Alexandra DeLucia and Mark Dredze},
  journal={arXiv preprint arXiv:2503.15768},
  year={2025}
}