LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models

arXiv:2404.09695 · 15 April 2024
Guangyan Li, Yongqiang Tang, Wensheng Zhang

Papers citing "LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models"

6 / 6 papers shown

1. Spectral-Aware Low-Rank Adaptation for Speaker Verification
   Zhe Li, Man-Wai Mak, Mert Pilanci, Hung-yi Lee, H. Meng
   07 Jan 2025 · 41 · 0 · 0

2. OATS: Outlier-Aware Pruning Through Sparse and Low Rank Decomposition
   Stephen Zhang, V. Papyan
   Communities: VLM
   20 Sep 2024 · 38 · 1 · 0

3. LORTSAR: Low-Rank Transformer for Skeleton-based Action Recognition
   Soroush Oraki, Harry Zhuang, Jie Liang
   19 Jul 2024 · 39 · 1 · 0

4. SCOTT: Self-Consistent Chain-of-Thought Distillation
   Jamie Yap, Zhengyang Wang, Zheng Li, K. Lynch, Bing Yin, Xiang Ren
   Communities: LRM
   03 May 2023 · 57 · 91 · 0

5. GLM-130B: An Open Bilingual Pre-trained Model
   Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, ..., Jidong Zhai, Wenguang Chen, Peng-Zhen Zhang, Yuxiao Dong, Jie Tang
   Communities: BDL, LRM
   05 Oct 2022 · 240 · 1,070 · 0

6. Large Language Models are Zero-Shot Reasoners
   Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
   Communities: ReLM, LRM
   24 May 2022 · 291 · 2,712 · 0