arXiv: 2012.02144
Cited By
Do We Really Need That Many Parameters In Transformer For Extractive Summarization? Discourse Can Help!
3 December 2020
Wen Xiao
Patrick Huber
Giuseppe Carenini
Papers citing "Do We Really Need That Many Parameters In Transformer For Extractive Summarization? Discourse Can Help!" (7 papers shown)
Incorporating Distributions of Discourse Structure for Long Document Abstractive Summarization. Dongqi Pu, Yifa Wang, Vera Demberg. 26 May 2023.

Towards Domain-Independent Supervised Discourse Parsing Through Gradient Boosting. Patrick Huber, Giuseppe Carenini. 18 Oct 2022.

W-RST: Towards a Weighted RST-style Discourse Framework. Patrick Huber, Wen Xiao, Giuseppe Carenini. 04 Jun 2021.

Predicting Discourse Trees from Transformer-based Neural Summarizers. Wen Xiao, Patrick Huber, Giuseppe Carenini. 14 Apr 2021.

Unsupervised Learning of Discourse Structures using a Tree Autoencoder. Patrick Huber, Giuseppe Carenini. 17 Dec 2020.

Text Summarization with Pretrained Encoders. Yang Liu, Mirella Lapata. 22 Aug 2019.

Effective Approaches to Attention-based Neural Machine Translation. Thang Luong, Hieu H. Pham, Christopher D. Manning. 17 Aug 2015.