BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension (arXiv:1910.13461)

Annual Meeting of the Association for Computational Linguistics (ACL), 2020
29 October 2019
M. Lewis
Yinhan Liu
Naman Goyal
Marjan Ghazvininejad
Abdel-rahman Mohamed
Omer Levy
Veselin Stoyanov
Luke Zettlemoyer
    AIMat, VLM

Papers citing "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension"

49 / 5,149 papers shown
Recipes for building an open-domain chatbot
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2020
Stephen Roller
Emily Dinan
Naman Goyal
Da Ju
Mary Williamson
...
Myle Ott
Kurt Shuster
Eric Michael Smith
Y-Lan Boureau
Jason Weston
ALM
530
1,085
0
28 Apr 2020
Lite Transformer with Long-Short Range Attention
International Conference on Learning Representations (ICLR), 2020
Zhanghao Wu
Zhijian Liu
Ji Lin
Chengyue Wu
Song Han
178
364
0
24 Apr 2020
QURIOUS: Question Generation Pretraining for Text Generation
Shashi Narayan
Gonçalo Simões
Ji Ma
Hannah Craighead
Ryan T. McDonald
161
16
0
23 Apr 2020
AmbigQA: Answering Ambiguous Open-domain Questions
Sewon Min
Julian Michael
Hannaneh Hajishirzi
Luke Zettlemoyer
418
397
0
22 Apr 2020
Longformer: The Long-Document Transformer
Iz Beltagy
Matthew E. Peters
Arman Cohan
RALM, VLM
682
4,902
0
10 Apr 2020
Dense Passage Retrieval for Open-Domain Question Answering
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Vladimir Karpukhin
Barlas Oğuz
Sewon Min
Patrick Lewis
Ledell Yu Wu
Sergey Edunov
Danqi Chen
Anuj Kumar
RALM
637
4,807
0
10 Apr 2020
Improving Readability for Automatic Speech Recognition Transcription
Junwei Liao
Sefik Emre Eskimez
Liyang Lu
Yu Shi
Ming Gong
Linjun Shou
Hong Qu
Michael Zeng
143
60
0
09 Apr 2020
Asking and Answering Questions to Evaluate the Factual Consistency of Summaries
Annual Meeting of the Association for Computational Linguistics (ACL), 2020
Alex Jinpeng Wang
Dong Wang
M. Lewis
HILM
289
538
0
08 Apr 2020
Transfer learning and subword sampling for asymmetric-resource one-to-many neural translation
Machine Translation (MT), 2020
Stig-Arne Grönroos
Sami Virpioja
M. Kurimo
247
7
0
08 Apr 2020
Rapformer: Conditional Rap Lyrics Generation with Denoising Autoencoders
Nikola I. Nikolov
Eric Malmi
Curtis G. Northcutt
Loreto Parisi
AI4CE
215
6
0
08 Apr 2020
Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Chunyuan Li
Xiang Gao
Yuan Li
Baolin Peng
Xiujun Li
Yizhe Zhang
Jianfeng Gao
SSL, DRL
466
194
0
05 Apr 2020
A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining
Chenguang Zhu
Ruochen Xu
Michael Zeng
Xuedong Huang
BDL, AI4TS
450
18
0
04 Apr 2020
XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Yaobo Liang
Nan Duan
Yeyun Gong
Ning Wu
Fenfei Guo
...
Shuguang Liu
Fan Yang
Daniel Fernando Campos
Rangan Majumder
Ming Zhou
ELM, VLM
310
370
0
03 Apr 2020
Abstractive Summarization with Combination of Pre-trained Sequence-to-Sequence and Saliency Models
Itsumi Saito
Kyosuke Nishida
Kosuke Nishida
J. Tomita
146
31
0
29 Mar 2020
Felix: Flexible Text Editing Through Tagging and Insertion
Findings of EMNLP, 2020
Jonathan Mallinson
Aliaksei Severyn
Eric Malmi
Guillermo Garrido
181
81
0
24 Mar 2020
Enhancing Factual Consistency of Abstractive Summarization
Chenguang Zhu
William Fu-Hinthorn
Ruochen Xu
Qingkai Zeng
Michael Zeng
Xuedong Huang
Meng Jiang
HILM, KELM
509
45
0
19 Mar 2020
TTTTTackling WinoGrande Schemas
Sheng-Chieh Lin
Jheng-Hong Yang
Rodrigo Nogueira
Ming-Feng Tsai
Chuan-Ju Wang
Jimmy Lin
88
6
0
18 Mar 2020
Pre-trained Models for Natural Language Processing: A Survey
Science China Technological Sciences (Sci China Technol Sci), 2020
Xipeng Qiu
Tianxiang Sun
Yige Xu
Yunfan Shao
Ning Dai
Xuanjing Huang
LM&MA, VLM
1.1K
1,623
0
18 Mar 2020
A Survey on Contextual Embeddings
Qi Liu
Matt J. Kusner
Phil Blunsom
447
169
0
16 Mar 2020
Document Ranking with a Pretrained Sequence-to-Sequence Model
Findings of EMNLP, 2020
Rodrigo Nogueira
Zhiying Jiang
Jimmy J. Lin
279
684
0
14 Mar 2020
Sentence Analogies: Exploring Linguistic Relationships and Regularities in Sentence Embeddings
Xunjie Zhu
Gerard de Melo
NAI
135
12
0
09 Mar 2020
The growing amplification of social media: Measuring temporal and social contagion dynamics for over 150 languages on Twitter for 2009-2020
Thayer Alshaabi
D. R. Dewhurst
J. Minot
M. V. Arnold
J. L. Adams
C. Danforth
P. Dodds
513
19
0
07 Mar 2020
XGPT: Cross-modal Generative Pre-Training for Image Captioning
Natural Language Processing and Chinese Computing (NLPCC), 2020
Qiaolin Xia
Haoyang Huang
Nan Duan
Dongdong Zhang
Lei Ji
Zhifang Sui
Edward Cui
Taroon Bharti
Xin Liu
Ming Zhou
MLLM, VLM
243
84
0
03 Mar 2020
UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training
International Conference on Machine Learning (ICML), 2020
Hangbo Bao
Li Dong
Furu Wei
Wenhui Wang
Nan Yang
...
Yu Wang
Songhao Piao
Jianfeng Gao
Ming Zhou
H. Hon
AI4CE
182
417
0
28 Feb 2020
A Primer in BERTology: What we know about how BERT works
Transactions of the Association for Computational Linguistics (TACL), 2020
Anna Rogers
Olga Kovaleva
Anna Rumshisky
OffRL
474
1,717
0
27 Feb 2020
MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers
Neural Information Processing Systems (NeurIPS), 2020
Wenhui Wang
Furu Wei
Li Dong
Hangbo Bao
Nan Yang
Ming Zhou
VLM
1.3K
1,757
0
25 Feb 2020
Modelling Latent Skills for Multitask Language Generation
Kris Cao
Dani Yogatama
140
3
0
21 Feb 2020
CodeBERT: A Pre-Trained Model for Programming and Natural Languages
Findings of EMNLP, 2020
Zhangyin Feng
Daya Guo
Duyu Tang
Nan Duan
Xiaocheng Feng
...
Linjun Shou
Bing Qin
Ting Liu
Daxin Jiang
Ming Zhou
1.2K
3,386
0
19 Feb 2020
Learning by Semantic Similarity Makes Abstractive Summarization Better
Wonjin Yoon
Yoonsun Yeo
Minbyul Jeong
Bong-Jun Yi
Jaewoo Kang
234
18
0
18 Feb 2020
UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation
Huaishao Luo
Lei Ji
Ding Wang
Haoyang Huang
Nan Duan
Tianrui Li
Jason Li
Xilin Chen
Ming Zhou
VLM
375
417
0
15 Feb 2020
REALM: Retrieval-Augmented Language Model Pre-Training
International Conference on Machine Learning (ICML), 2020
Kelvin Guu
Kenton Lee
Zora Tung
Panupong Pasupat
Ming-Wei Chang
RALM
1.2K
2,592
0
10 Feb 2020
A Multilingual View of Unsupervised Machine Translation
Findings of EMNLP, 2020
Xavier Garcia
Pierre Foret
Thibault Sellam
Ankur P. Parikh
249
37
0
07 Feb 2020
ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation
International Joint Conference on Artificial Intelligence (IJCAI), 2020
Dongling Xiao
Han Zhang
Yukun Li
Yu Sun
Hao Tian
Hua Wu
Haifeng Wang
215
132
0
26 Jan 2020
Multilingual Denoising Pre-training for Neural Machine Translation
Transactions of the Association for Computational Linguistics (TACL), 2020
Yinhan Liu
Jiatao Gu
Naman Goyal
Xian Li
Sergey Edunov
Marjan Ghazvininejad
M. Lewis
Luke Zettlemoyer
AI4CE, AIMat
896
1,982
0
22 Jan 2020
Length-controllable Abstractive Summarization by Guiding with Summary Prototype
Itsumi Saito
Kyosuke Nishida
Kosuke Nishida
Atsushi Otsuka
Hisako Asano
J. Tomita
Hiroyuki Shindo
Yuji Matsumoto
248
37
0
21 Jan 2020
RobBERT: a Dutch RoBERTa-based Language Model
Findings of EMNLP, 2020
Pieter Delobelle
Thomas Winters
Bettina Berendt
198
262
0
17 Jan 2020
ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training
Findings of EMNLP, 2020
Weizhen Qi
Yu Yan
Yeyun Gong
Dayiheng Liu
Nan Duan
Jiusheng Chen
Ruofei Zhang
Ming Zhou
AI4TS
367
472
0
13 Jan 2020
PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization
International Conference on Machine Learning (ICML), 2020
Jingqing Zhang
Yao-Min Zhao
Mohammad Saleh
Peter J. Liu
RALM, 3DGS
842
2,310
0
18 Dec 2019
A Survey on Document-level Neural Machine Translation: Methods and Evaluation
Sameen Maruf
Fahimeh Saleh
Gholamreza Haffari
AI4TS
230
25
0
18 Dec 2019
Large-scale Pretraining for Visual Dialog: A Simple State-of-the-Art Baseline
European Conference on Computer Vision (ECCV), 2020
Vishvak Murahari
Dhruv Batra
Devi Parikh
Abhishek Das
VLM
349
120
0
05 Dec 2019
Machines Getting with the Program: Understanding Intent Arguments of Non-Canonical Directives
Findings of EMNLP, 2020
Won Ik Cho
Y. Moon
Sangwhan Moon
Seokhwan Kim
N. Kim
133
6
0
01 Dec 2019
Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training
Annual Meeting of the Association for Computational Linguistics (ACL), 2020
Margaret Li
Stephen Roller
Ilia Kulikov
Sean Welleck
Y-Lan Boureau
Dong Wang
Jason Weston
277
198
0
10 Nov 2019
The Dialogue Dodecathlon: Open-Domain Knowledge and Image Grounded Conversational Agents
Annual Meeting of the Association for Computational Linguistics (ACL), 2020
Kurt Shuster
Da Ju
Stephen Roller
Emily Dinan
Y-Lan Boureau
Jason Weston
261
84
0
09 Nov 2019
CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning
Bill Yuchen Lin
Wangchunshu Zhou
Minghan Shen
Pei Zhou
Chandra Bhagavatula
Yu Xing
Xiang Ren
LRM
376
16
0
09 Nov 2019
Conditional Text Generation for Harmonious Human-Machine Interaction
Bin Guo
Hao Wang
Yasan Ding
Wei Wu
Shaoyang Hao
Yueqi Sun
Zhiwen Yu
184
4
0
08 Sep 2019
Global Entity Disambiguation with BERT
North American Chapter of the Association for Computational Linguistics (NAACL), 2022
Ikuya Yamada
Koki Washio
Hiroyuki Shindo
Yuji Matsumoto
444
37
0
01 Sep 2019
Leveraging Pre-trained Checkpoints for Sequence Generation Tasks
Transactions of the Association for Computational Linguistics (TACL), 2020
S. Rothe
Shashi Narayan
Aliaksei Severyn
SILM
325
459
0
29 Jul 2019
An Attentive Survey of Attention Models
S. Chaudhari
Varun Mithal
Gungor Polatkan
R. Ramanath
409
722
0
05 Apr 2019
Neural Abstractive Text Summarization with Sequence-to-Sequence Models
Tian Shi
Yaser Keneshloo
Naren Ramakrishnan
Chandan K. Reddy
408
252
0
05 Dec 2018