Does Pretraining for Summarization Require Knowledge Transfer?
arXiv:2109.04953, 10 September 2021
Kundan Krishna, Jeffrey P. Bigham, Zachary Chase Lipton
ArXiv | PDF | HTML

Papers citing "Does Pretraining for Summarization Require Knowledge Transfer?"

11 papers:

  1. Responsible AI Considerations in Text Summarization Research: A Review of Current Practices. Yu Lu Liu, Meng Cao, Su Lin Blodgett, Jackie Chi Kit Cheung, Alexandra Olteanu, Adam Trischler. 18 Nov 2023.
  2. On the Role of Parallel Data in Cross-lingual Transfer Learning. Machel Reid, Mikel Artetxe. 20 Dec 2022.
  3. Downstream Datasets Make Surprisingly Good Pretraining Corpora. Kundan Krishna, Saurabh Garg, Jeffrey P. Bigham, Zachary Chase Lipton. 28 Sep 2022.
  4. MonoByte: A Pool of Monolingual Byte-level Language Models. Hugo Queiroz Abonizio, Leandro Rodrigues de Souza, R. Lotufo, Rodrigo Nogueira. 22 Sep 2022.
  5. Insights into Pre-training via Simpler Synthetic Tasks. Yuhuai Wu, Felix Li, Percy Liang. 21 Jun 2022. [AIMat]
  6. E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation. Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao. 30 May 2022.
  7. Fusing finetuned models for better pretraining. Leshem Choshen, Elad Venezian, Noam Slonim, Yoav Katz. 06 Apr 2022. [FedML, AI4CE, MoMe]
  8. Measuring the Impact of Individual Domain Factors in Self-Supervised Pre-Training. Ramon Sanabria, Wei-Ning Hsu, Alexei Baevski, Michael Auli. 01 Mar 2022.
  9. Are Large-scale Datasets Necessary for Self-Supervised Pre-training? Alaaeldin El-Nouby, Gautier Izacard, Hugo Touvron, Ivan Laptev, Hervé Jégou, Edouard Grave. 20 Dec 2021. [SSL]
  10. ScisummNet: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks. Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander R. Fabbri, Irene Z Li, Dan Friedman, Dragomir R. Radev. 04 Sep 2019.
  11. Text Summarization with Pretrained Encoders. Yang Liu, Mirella Lapata. 22 Aug 2019. [MILM]