Greenformers: Improving Computation and Memory Efficiency in Transformer Models via Low-Rank Approximation

24 August 2021
Samuel Cahyawijaya
arXiv: 2108.10808
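The paper's title points to low-rank matrix approximation as the mechanism for reducing compute and memory in transformer layers. As a rough, illustrative sketch only (not the authors' exact procedure), a dense weight matrix W can be replaced by two thin factors obtained from a truncated SVD, so that W @ x is computed as A @ (B @ x); the function name and layer sizes below are hypothetical.

```python
import numpy as np

def low_rank_factorize(W: np.ndarray, rank: int):
    """Approximate W (d_out x d_in) by thin factors A (d_out x rank) and
    B (rank x d_in) using a truncated SVD. Illustrative sketch only."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # absorb singular values into the left factor
    B = Vt[:rank, :]
    return A, B

# Example: a 1024x1024 layer (~1.05M parameters) factorized at rank 64
# stores only 2 * 1024 * 64 = 131,072 parameters, and W @ x becomes
# A @ (B @ x), reducing multiply-adds in the same proportion.
W = np.random.randn(1024, 1024)
A, B = low_rank_factorize(W, rank=64)
x = np.random.randn(1024)
approx = A @ (B @ x)   # low-rank approximation of W @ x
```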

Papers citing "Greenformers: Improving Computation and Memory Efficiency in Transformer Models via Low-Rank Approximation"

13 / 13 papers shown
Compression Barriers for Autoregressive Transformers
Themistoklis Haris, Krzysztof Onak
21 Feb 2025

SPA: Towards A Computational Friendly Cloud-Base and On-Devices Collaboration Seq2seq Personalized Generation
Yanming Liu, Xinyue Peng, Jiannan Cao, Le Dai, Xingzu Liu, Mingbang Wang, Weihao Liu
11 Mar 2024

Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition
Samuel Cahyawijaya, Holy Lovenia, Willy Chung, Rita Frieske, Zihan Liu, Pascale Fung
26 Jun 2023

One Country, 700+ Languages: NLP Challenges for Underrepresented Languages and Dialects in Indonesia
Alham Fikri Aji, Genta Indra Winata, Fajri Koto, Samuel Cahyawijaya, Ade Romadhony, ..., David Moeljadi, Radityo Eko Prasojo, Timothy Baldwin, Jey Han Lau, Sebastian Ruder
24 Mar 2022

Greenformer: Factorization Toolkit for Efficient Deep Neural Networks
Samuel Cahyawijaya, Genta Indra Winata, Holy Lovenia, Bryan Wilie, Wenliang Dai, Etsuko Ishii, Pascale Fung
14 Sep 2021

Model Generalization on COVID-19 Fake News Detection
Yejin Bang, Etsuko Ishii, Samuel Cahyawijaya, Ziwei Ji, Pascale Fung
11 Jan 2021

BinaryBERT: Pushing the Limit of BERT Quantization
Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, Irwin King
31 Dec 2020

CrossNER: Evaluating Cross-Domain Named Entity Recognition
Zihan Liu, Yan Xu, Tiezheng Yu, Wenliang Dai, Ziwei Ji, Samuel Cahyawijaya, Andrea Madotto, Pascale Fung
08 Dec 2020

Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed
28 Jul 2020

Measuring the Algorithmic Efficiency of Neural Networks
Danny Hernandez, Tom B. Brown
08 May 2020

Efficient Content-Based Sparse Attention with Routing Transformers
Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier
12 Mar 2020

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
17 Sep 2019

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
09 Mar 2017