s2s-ft: Fine-Tuning Pretrained Transformer Encoders for Sequence-to-Sequence Learning

26 October 2021 (arXiv: 2110.13640)
Hangbo Bao, Li Dong, Wenhui Wang, Nan Yang, Furu Wei

Papers citing "s2s-ft: Fine-Tuning Pretrained Transformer Encoders for Sequence-to-Sequence Learning"

5 / 5 papers shown
GanLM: Encoder-Decoder Pre-training with an Auxiliary Discriminator
Jian Yang, Shuming Ma, Li Dong, Shaohan Huang, Haoyang Huang, Yuwei Yin, Dongdong Zhang, Liqun Yang, Furu Wei, Zhoujun Li
SyDa, AI4CE
20 Dec 2022

Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks
Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, ..., Kriti Aggarwal, O. Mohammed, Saksham Singhal, Subhojit Som, Furu Wei
MLLM, VLM, ViT
22 Aug 2022

Exploiting Global and Local Hierarchies for Hierarchical Text Classification
Ting Jiang, Deqing Wang, Leilei Sun, Zhongzhi Chen, Fuzhen Zhuang, Qinghong Yang
05 May 2022

Enhance Incomplete Utterance Restoration by Joint Learning Token Extraction and Text Generation
Shumpei Inoue, Tsun-Jui Liu, Nguyen Hong Son, Minh Le Nguyen
08 Apr 2022

Text Summarization with Pretrained Encoders
Yang Liu, Mirella Lapata
MILM
22 Aug 2019