ResearchTrend.AI
Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models

19 October 2023
Weize Chen
Xiaoyue Xu
Xu Han
Yankai Lin
Ruobing Xie
Zhiyuan Liu
Maosong Sun
Jie Zhou

Papers citing "Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models"

2 / 2 papers shown
The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao
Stella Biderman
Sid Black
Laurence Golding
Travis Hoppe
...
Horace He
Anish Thite
Noa Nabeshima
Shawn Presser
Connor Leahy
AIMat
31 Dec 2020
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang
Amanpreet Singh
Julian Michael
Felix Hill
Omer Levy
Samuel R. Bowman
ELM
20 Apr 2018