ResearchTrend.AI

Leveraging Locality in Abstractive Text Summarization

25 May 2022
Yixin Liu, Ansong Ni, Linyong Nan, Budhaditya Deb, Chenguang Zhu, Ahmed Hassan Awadallah, Dragomir R. Radev

Papers citing "Leveraging Locality in Abstractive Text Summarization"

13 papers shown
Equipping Transformer with Random-Access Reading for Long-Context Understanding
Chenghao Yang, Zi Yang, Nan Hua (21 May 2024)
Chunk, Align, Select: A Simple Long-sequence Processing Method for Transformers
Jiawen Xie, Pengyu Cheng, Xiao Liang, Yong Dai, Nan Du (25 Aug 2023)
AWESOME: GPU Memory-constrained Long Document Summarization using Memory Mechanism and Global Salient Content
Shuyang Cao, Lu Wang (24 May 2023)
Abstractive Text Summarization Using the BRIO Training Paradigm
Khang Nhut Lam, Thieu Gia Doan, Khang Thua Pham, Jugal Kalita (23 May 2023)
A Hierarchical Encoding-Decoding Scheme for Abstractive Multi-document Summarization
Chenhui Shen, Liying Cheng, Xuan-Phi Nguyen, Yang You, Lidong Bing (15 May 2023)
A Survey on Long Text Modeling with Transformers
Zican Dong, Tianyi Tang, Lunyi Li, Wayne Xin Zhao (28 Feb 2023) [VLM]
Adapting Pretrained Text-to-Text Models for Long Text Sequences
Wenhan Xiong, Anchit Gupta, Shubham Toshniwal, Yashar Mehdad, Wen-tau Yih (21 Sep 2022) [RALM, VLM]
HIBRIDS: Attention with Hierarchical Biases for Structure-aware Long Document Summarization
Shuyang Cao, Lu Wang (21 Mar 2022)
PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization
Wen Xiao, Iz Beltagy, Giuseppe Carenini, Arman Cohan (16 Oct 2021) [CVBM]
Unsupervised Extractive Summarization by Pre-training Hierarchical Transformers
Shusheng Xu, Xingxing Zhang, Yi Wu, Furu Wei, Ming Zhou (16 Oct 2020)
Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed (28 Jul 2020) [VLM]
On Extractive and Abstractive Neural Document Summarization with Transformer Language Models
Sandeep Subramanian, Raymond Li, Jonathan Pilault, C. Pal (07 Sep 2019)
Teaching Machines to Read and Comprehend
Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, L. Espeholt, W. Kay, Mustafa Suleyman, Phil Blunsom (10 Jun 2015)