ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Papers / 2202.07959 / Cited By
EdgeFormer: A Parameter-Efficient Transformer for On-Device Seq2seq Generation

16 February 2022
Tao Ge, Si-Qing Chen, Furu Wei
MoE

Papers citing "EdgeFormer: A Parameter-Efficient Transformer for On-Device Seq2seq Generation"

20 / 20 papers shown
GRAPHGPT-O: Synergistic Multimodal Comprehension and Generation on Graphs
Yi Fang, Bowen Jin, Jiacheng Shen, Sirui Ding, Qiaoyu Tan, J. Han
17 Feb 2025

Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA
Sangmin Bae, Adam Fisch, Hrayr Harutyunyan, Ziwei Ji, Seungyeon Kim, Tal Schuster
KELM
28 Oct 2024

ED-ViT: Splitting Vision Transformer for Distributed Inference on Edge Devices
Xiang Liu, Yijun Song, Xia Li, Yifei Sun, Huiying Lan, Zemin Liu, Linshan Jiang, Jialin Li
15 Oct 2024

MoS: Unleashing Parameter Efficiency of Low-Rank Adaptation with Mixture of Shards
Sheng Wang, Liheng Chen, Pengan Chen, Jingwei Dong, Boyang Xue, Jiyue Jiang, Lingpeng Kong, Chuan Wu
MoE
01 Oct 2024

The Fire Thief Is Also the Keeper: Balancing Usability and Privacy in Prompts
Zhili Shen, Zihang Xi, Ying He, Wei Tong, Jingyu Hua, Sheng Zhong
SILM
20 Jun 2024

Adapter-X: A Novel General Parameter-Efficient Fine-Tuning Framework for Vision
Minglei Li, Peng Ye, Yongqi Huang, Lin Zhang, Tao Chen, Tong He, Jiayuan Fan, Wanli Ouyang
MoE
05 Jun 2024

SPA: Towards A Computational Friendly Cloud-Base and On-Devices Collaboration Seq2seq Personalized Generation
Yanming Liu, Xinyue Peng, Jiannan Cao, Le Dai, Xingzu Liu, Mingbang Wang, Weihao Liu
SyDa
11 Mar 2024

PRoLoRA: Partial Rotation Empowers More Parameter-Efficient LoRA
Sheng Wang, Boyang Xue, Jiacheng Ye, Jiyue Jiang, Liheng Chen, Lingpeng Kong, Chuan Wu
24 Feb 2024

PartialFormer: Modeling Part Instead of Whole for Machine Translation
Tong Zheng, Bei Li, Huiwen Bao, Jiale Wang, Weiqiao Shan, Tong Xiao, Jingbo Zhu
MoE, AI4CE
23 Oct 2023

One Wide Feedforward is All You Need
Telmo Pires, António V. Lopes, Yannick Assogba, Hendra Setiawan
04 Sep 2023

MobileNMT: Enabling Translation in 15MB and 30ms
Ye Lin, Xiaohui Wang, Zhexi Zhang, Mingxuan Wang, Tong Xiao, Jingbo Zhu
MQ
07 Jun 2023

Rediscovering Hashed Random Projections for Efficient Quantization of Contextualized Sentence Embeddings
Ulf A. Hamster, Ji-Ung Lee, Alexander Geyken, Iryna Gurevych
13 Mar 2023

Too Brittle To Touch: Comparing the Stability of Quantization and Distillation Towards Developing Lightweight Low-Resource MT Models
Harshita Diddee, Sandipan Dandapat, Monojit Choudhury, T. Ganu, Kalika Bali
27 Oct 2022

Real-time Speech Interruption Analysis: From Cloud to Client Deployment
Quchen Fu, Szu-Wei Fu, Yaran Fan, Yu-Huan Wu, Zhuo Chen, J. Gupchup, Ross Cutler
24 Oct 2022

MiniALBERT: Model Distillation via Parameter-Efficient Recursive Transformers
Mohammadmahdi Nouriborji, Omid Rohanian, Samaneh Kouchaki, David A. Clifton
12 Oct 2022

Efficient Methods for Natural Language Processing: A Survey
Marcos Vinícius Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, ..., Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, Roy Schwartz
31 Aug 2022

Knowledge Distillation of Transformer-based Language Models Revisited
Chengqiang Lu, Jianwei Zhang, Yunfei Chu, Zhengyu Chen, Jingren Zhou, Fei Wu, Haiqing Chen, Hongxia Yang
VLM
29 Jun 2022

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM
18 Apr 2021

Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with 1/n Parameters
Aston Zhang, Yi Tay, Shuai Zhang, Alvin Chan, A. Luu, S. Hui, Jie Fu
MQ
17 Feb 2021

BERT-of-Theseus: Compressing BERT by Progressive Module Replacing
Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, Ming Zhou
07 Feb 2020