ResearchTrend.AI
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension (arXiv:1910.13461)

29 October 2019
M. Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdel-rahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer
Topics: AIMat, VLM

Papers citing "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension"

Showing 41 of 4,141 citing papers.
Conversational Semantic Parsing
Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, K. Diedrick, Mike Haeger, ..., Yashar Mehdad, Ves Stoyanov, Anuj Kumar, M. Lewis, S. Gupta
28 Sep 2020

KG-BART: Knowledge Graph-Augmented BART for Generative Commonsense Reasoning
Ye Liu, Yao Wan, Lifang He, Hao Peng, Philip S. Yu
26 Sep 2020

Generation-Augmented Retrieval for Open-domain Question Answering
Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, Weizhu Chen
Topics: RALM
17 Sep 2020

Code-switching pre-training for neural machine translation
Zhen Yang, Bojie Hu, Ambyera Han, Shen Huang, Qi Ju
17 Sep 2020

It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners
Timo Schick, Hinrich Schütze
15 Sep 2020

Global-aware Beam Search for Neural Abstractive Summarization
Ye Ma, Zixun Lan, Lu Zong, Kaizhu Huang
15 Sep 2020

GeDi: Generative Discriminator Guided Sequence Generation
Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, N. Keskar, Shafiq R. Joty, R. Socher, Nazneen Rajani
14 Sep 2020

Query Focused Multi-document Summarisation of Biomedical Texts
Diego Mollá Aliod, Christopher Jones, Vincent Nguyen
27 Aug 2020

A Baseline Analysis for Podcast Abstractive Summarization
Chujie Zheng, Harry J. Wang, Kunpeng Zhang, Ling Fan
24 Aug 2020

Multilingual Translation with Extensible Multilingual Pretraining and Finetuning
Y. Tang, C. Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan
Topics: CLL
02 Aug 2020

SummEval: Re-evaluating Summarization Evaluation
Alexander R. Fabbri, Wojciech Kryściński, Bryan McCann, Caiming Xiong, R. Socher, Dragomir R. Radev
Topics: HILM
24 Jul 2020

Self-Supervised Representations Improve End-to-End Speech Translation
Anne Wu, Changhan Wang, J. Pino, Jiatao Gu
Topics: SSL
22 Jun 2020

Dataset for Automatic Summarization of Russian News
I. Gusev
19 Jun 2020

Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation
Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, Noah A. Smith
18 Jun 2020

Cross-lingual Retrieval for Iterative Self-Supervised Training
C. Tran, Y. Tang, Xian Li, Jiatao Gu
Topics: RALM
16 Jun 2020

To Pretrain or Not to Pretrain: Examining the Benefits of Pretraining on Resource Rich Tasks
Sinong Wang, Madian Khabsa, Hao Ma
15 Jun 2020

Revisiting Few-sample BERT Fine-tuning
Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q. Weinberger, Yoav Artzi
10 Jun 2020

Linformer: Self-Attention with Linear Complexity
Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma
08 Jun 2020

Unsupervised Translation of Programming Languages
Marie-Anne Lachaux, Baptiste Roziere, L. Chanussot, Guillaume Lample
05 Jun 2020

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
Topics: BDL
28 May 2020

Predict-then-Decide: A Predictive Approach for Wait or Answer Task in Dialogue Systems
Zehao Lin, Shaobo Cui, Guodun Li, Xiaoming Kang, Feng Ji, Feng-Lin Li, Zhongzhou Zhao, Haiqing Chen, Yin Zhang
27 May 2020

Question-Driven Summarization of Answers to Consumer Health Questions
Max E. Savery, Asma Ben Abacha, Soumya Gayen, Dina Demner-Fushman
18 May 2020

Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward
Luyang Huang, Lingfei Wu, Lu Wang
Topics: RALM
03 May 2020

RICA: Evaluating Robust Inference Capabilities Based on Commonsense Axioms
Pei Zhou, Rahul Khanna, Seyeon Lee, Bill Yuchen Lin, Daniel E. Ho, Jay Pujara, Xiang Ren
Topics: ReLM
02 May 2020

MUSS: Multilingual Unsupervised Sentence Simplification by Mining Paraphrases
Louis Martin, Angela Fan, Eric Villemonte de la Clergerie, Antoine Bordes, Benoît Sagot
01 May 2020

QURIOUS: Question Generation Pretraining for Text Generation
Shashi Narayan, Gonçalo Simões, Ji Ma, Hannah Craighead, Ryan T. McDonald
23 Apr 2020

AmbigQA: Answering Ambiguous Open-domain Questions
Sewon Min, Julian Michael, Hannaneh Hajishirzi, Luke Zettlemoyer
22 Apr 2020

Longformer: The Long-Document Transformer
Iz Beltagy, Matthew E. Peters, Arman Cohan
Topics: RALM, VLM
10 Apr 2020

Asking and Answering Questions to Evaluate the Factual Consistency of Summaries
Alex Jinpeng Wang, Kyunghyun Cho, M. Lewis
Topics: HILM
08 Apr 2020

A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining
Chenguang Zhu, Ruochen Xu, Michael Zeng, Xuedong Huang
Topics: BDL, AI4TS
04 Apr 2020

Enhancing Factual Consistency of Abstractive Summarization
Chenguang Zhu, William Fu-Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng, Xuedong Huang, Meng-Long Jiang
Topics: HILM, KELM
19 Mar 2020

Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
Topics: LM&MA, VLM
18 Mar 2020

MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers
Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, Ming Zhou
Topics: VLM
25 Feb 2020

REALM: Retrieval-Augmented Language Model Pre-Training
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang
Topics: RALM
10 Feb 2020

Multilingual Denoising Pre-training for Neural Machine Translation
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, M. Lewis, Luke Zettlemoyer
Topics: AI4CE, AIMat
22 Jan 2020

RobBERT: a Dutch RoBERTa-based Language Model
Pieter Delobelle, Thomas Winters, Bettina Berendt
17 Jan 2020

PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization
Jingqing Zhang, Yao-Min Zhao, Mohammad Saleh, Peter J. Liu
Topics: RALM, 3DGS
18 Dec 2019

The Dialogue Dodecathlon: Open-Domain Knowledge and Image Grounded Conversational Agents
Kurt Shuster, Da Ju, Stephen Roller, Emily Dinan, Y-Lan Boureau, Jason Weston
09 Nov 2019

Text Summarization with Pretrained Encoders
Yang Liu, Mirella Lapata
Topics: MILM
22 Aug 2019

Leveraging Pre-trained Checkpoints for Sequence Generation Tasks
S. Rothe, Shashi Narayan, Aliaksei Severyn
Topics: SILM
29 Jul 2019

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Topics: ELM
20 Apr 2018