Efficient pre-training objectives for Transformers

20 April 2021
Luca Di Liello
Matteo Gabburo
Alessandro Moschitti

Papers citing "Efficient pre-training objectives for Transformers"

3 / 3 papers shown
Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding
Abbas Ghaddar, Yimeng Wu, Sunyam Bagga, Ahmad Rashid, Khalil Bibi, ..., Zhefeng Wang, Baoxing Huai, Xin Jiang, Qun Liu, Philippe Langlais
21 May 2022
AMMUS : A Survey of Transformer-based Pretrained Models in Natural Language Processing
Katikapalli Subramanyam Kalyan, A. Rajasekharan, S. Sangeetha
12 Aug 2021
Transfer training from smaller language model
Han Zhang
23 Apr 2021