ResearchTrend.AI
Direction is what you need: Improving Word Embedding Compression in Large Language Models
arXiv: 2106.08181 (v2, latest)
15 June 2021
Klaudia Bałazy, Mohammadreza Banaei, R. Lebret, Jacek Tabor, Karl Aberer

Papers citing "Direction is what you need: Improving Word Embedding Compression in Large Language Models"

3 citing papers
Semantic Compression for Word and Sentence Embeddings using Discrete Wavelet Transform
Rana Aref Salama, Abdou Youssef, Mona Diab
31 Jul 2025
TensorSLM: Energy-efficient Embedding Compression of Sub-billion Parameter Language Models on Low-end Devices
Mingxue Xu, Y. Xu, Danilo Mandic
16 Jun 2025
Revisiting Offline Compression: Going Beyond Factorization-based Methods for Transformer Language Models
Findings, 2023
Mohammadreza Banaei, Klaudia Bałazy, Artur Kasymov, R. Lebret, Jacek Tabor, Karl Aberer
08 Feb 2023