VTechAGP: An Academic-to-General-Audience Text Paraphrase Dataset and Benchmark Models

7 November 2024
Ming Cheng
Jiaying Gong
Chenhan Yuan
William A. Ingram
Edward A. Fox
Hoda Eldardiry
Abstract

Existing text simplification or paraphrase datasets mainly focus on sentence-level text generation in a general domain and are typically developed without using domain knowledge. In this paper, we release VTechAGP, the first academic-to-general-audience text paraphrase dataset, consisting of document-level thesis and dissertation abstract pairs (academic and general-audience) from 8 colleges, authored over 25 years. We also propose DSPT5, a novel dynamic soft prompt generative language model. For training, we leverage a contrastive-generative loss function to learn the keyword vectors in the dynamic prompt. For inference, we adopt a crowd-sampling decoding strategy at both the semantic and structural levels to further select the best output candidate. We evaluate DSPT5 and various state-of-the-art large language models (LLMs) from multiple perspectives. Results demonstrate that the SOTA LLMs do not provide satisfactory outcomes, while the lightweight DSPT5 achieves competitive results. To the best of our knowledge, we are the first to build a benchmark dataset and solutions for the academic-to-general-audience text paraphrasing task. Models will be made public after acceptance.
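The training recipe described above, learnable soft-prompt vectors tuned with a combined contrastive and generative objective, can be illustrated with a short sketch. The following is a minimal PyTorch sketch under stated assumptions, not the paper's actual DSPT5 implementation: the class name SoftPromptEncoder, the InfoNCE-style contrastive term over one positive and one negative keyword embedding, and the weights tau and lam are all illustrative.

    # Minimal sketch: soft prompts + contrastive-generative loss (illustrative,
    # not the paper's DSPT5). Assumes keyword embeddings are provided elsewhere.
    import torch
    import torch.nn.functional as F
    from torch import nn

    class SoftPromptEncoder(nn.Module):
        """Learnable prompt vectors prepended to the LM's input embeddings."""
        def __init__(self, prompt_len: int, d_model: int):
            super().__init__()
            self.prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)

        def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
            # input_embeds: (batch, seq, d) -> (batch, prompt_len + seq, d)
            batch = input_embeds.size(0)
            prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
            return torch.cat([prompt, input_embeds], dim=1)

    def contrastive_generative_loss(gen_loss, prompt_vecs, pos_kw, neg_kw,
                                    tau: float = 0.1, lam: float = 0.5):
        """gen_loss: standard LM cross-entropy. pos_kw/neg_kw: (d_model,)
        keyword embeddings (assumed given). Pulls the pooled prompt toward
        the positive keyword and away from the negative one (InfoNCE)."""
        anchor = F.normalize(prompt_vecs.mean(dim=0), dim=-1)
        pos = F.normalize(pos_kw, dim=-1)
        neg = F.normalize(neg_kw, dim=-1)
        logits = torch.stack([anchor @ pos, anchor @ neg]) / tau  # (2,)
        # Positive keyword sits at index 0 of the logits.
        contrastive = F.cross_entropy(logits.unsqueeze(0),
                                      torch.zeros(1, dtype=torch.long))
        return gen_loss + lam * contrastive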

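The inference side, crowd-sampling decoding, amounts to sampling several candidate paraphrases and re-ranking them at the semantic and structural levels. The sketch below assumes cosine similarity of sentence embeddings as the semantic criterion and a length-ratio proxy as the structural one; the paper's exact scoring functions and weighting are not specified here and may differ.

    # Minimal sketch: re-rank sampled candidates by a weighted semantic +
    # structural score (illustrative criteria, not the paper's exact ones).
    import torch
    import torch.nn.functional as F

    def semantic_score(src_emb: torch.Tensor, cand_emb: torch.Tensor) -> float:
        # Cosine similarity between pooled sentence embeddings.
        return F.cosine_similarity(src_emb, cand_emb, dim=-1).item()

    def structural_score(src_tokens: list, cand_tokens: list) -> float:
        # Crude structural proxy: penalize large length mismatch.
        ratio = len(cand_tokens) / max(len(src_tokens), 1)
        return 1.0 - abs(1.0 - ratio)

    def select_best(candidates, src_text, src_emb, embed_fn, alpha: float = 0.7):
        """candidates: sampled output strings; embed_fn: text -> embedding
        (assumed given). Returns the highest-scoring candidate."""
        src_tokens = src_text.split()
        best, best_score = None, float("-inf")
        for cand in candidates:
            score = (alpha * semantic_score(src_emb, embed_fn(cand))
                     + (1 - alpha) * structural_score(src_tokens, cand.split()))
            if score > best_score:
                best, best_score = cand, score
        return best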
@article{cheng2025_2411.04825,
  title={VTechAGP: An Academic-to-General-Audience Text Paraphrase Dataset and Benchmark Models},
  author={Ming Cheng and Jiaying Gong and Chenhan Yuan and William A. Ingram and Edward A. Fox and Hoda Eldardiry},
  journal={arXiv preprint arXiv:2411.04825},
  year={2025}
}