At Which Level Should We Extract? An Empirical Analysis on Extractive Document Summarization

6 April 2020
Qingyu Zhou, Furu Wei, Ming Zhou

Papers citing "At Which Level Should We Extract? An Empirical Analysis on Extractive Document Summarization"

4 papers shown
Exploring Optimal Granularity for Extractive Summarization of Unstructured Health Records: Analysis of the Largest Multi-Institutional Archive of Health Records in Japan
Kenichiro Ando, T. Okumura, Mamoru Komachi, Hiromasa Horiguchi, Yuji Matsumoto
20 Sep 2022
Generating Tips from Song Reviews: A New Dataset and Framework
Jingya Zang, Cuiyun Gao, Yupan Chen, Ruifeng Xu, Lanjun Zhou, Xuan Wang
14 May 2022
SciSummPip: An Unsupervised Scientific Paper Summarization Pipeline
Jiaxin Ju, Ming Liu, Longxiang Gao, Shirui Pan
19 Oct 2020
SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents
Ramesh Nallapati, Feifei Zhai, Bowen Zhou
14 Nov 2016