A Multi-Level Framework for Accelerating Training Transformer Models

7 April 2024
Longwei Zou, Han Zhang, Yangdong Deng
AI4CE

Papers citing "A Multi-Level Framework for Accelerating Training Transformer Models"

4 / 4 papers shown
Speeding up Deep Model Training by Sharing Weights and Then Unsharing
Shuo Yang, Le Hou, Xiaodan Song, Qiang Liu, Denny Zhou
08 Oct 2021

StackRec: Efficient Training of Very Deep Sequential Recommender Models by Iterative Stacking
Jiachun Wang, Fajie Yuan, Jian Chen, Qingyao Wu, Min Yang, Yang Sun, Guoxiao Zhang
BDL
14 Dec 2020

On the Transformer Growth for Progressive BERT Training
Xiaotao Gu, Liyuan Liu, Hongkun Yu, Jing Li, C. L. P. Chen, Jiawei Han
VLM
23 Oct 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM
20 Apr 2018