Plan-guided summarization attempts to reduce hallucinations in small language models (SLMs) by grounding generated summaries in the source text, typically by targeting fine-grained details such as dates or named entities. In this work, we investigate whether plan-based approaches in SLMs improve summarization for long-document, narrative tasks. The length and complexity of narrative texts often make them difficult to summarize faithfully. We analyze existing plan-guided solutions that target fine-grained details, and we also propose our own higher-level, narrative-based plan formulation. Our results show that neither approach significantly improves on a no-planning baseline in either summary quality or faithfulness. Human evaluation reveals that while plan-guided summaries are often well grounded in their plans, the plans themselves are as likely to contain hallucinations as the summaries. As a result, plan-guided summaries are just as unfaithful as those from models without planning. Our work serves as a cautionary tale for plan-guided approaches to summarization, especially in long, complex domains such as narrative texts.
@article{grenander2025_2504.09071,
  title={Exploration of Plan-Guided Summarization for Narrative Texts: the Case of Small Language Models},
  author={Matt Grenander and Siddharth Varia and Paula Czarnowska and Yogarshi Vyas and Kishaloy Halder and Bonan Min},
  journal={arXiv preprint arXiv:2504.09071},
  year={2025}
}