Online Embedding Compression for Text Classification using Low Rank Matrix Factorization (arXiv:1811.00641)
1 November 2018
Anish Acharya, Rahul Goel, A. Metallinou, Inderjit Dhillon

Papers citing "Online Embedding Compression for Text Classification using Low Rank Matrix Factorization"

30 / 30 papers shown

RWKV-Lite: Deeply Compressed RWKV for Resource-Constrained Devices
Wonkyo Choe, Yangfeng Ji, F. Lin (14 Dec 2024)

Improving embedding with contrastive fine-tuning on small datasets with expert-augmented scores
Jun Lu, David Li, Bill Ding, Yu Kang (19 Aug 2024)

Basis Selection: Low-Rank Decomposition of Pretrained Large Language Models for Target Applications
Yang Li, Changsheng Zhao, Hyungtak Lee, Ernie Chang, Yangyang Shi, Vikas Chandra (24 May 2024)

Bias Mitigation in Fine-tuning Pre-trained Models for Enhanced Fairness and Efficiency
Yixuan Zhang, Feng Zhou (01 Mar 2024)

Combining Explicit and Implicit Regularization for Efficient Learning in Deep Networks
Dan Zhao (01 Jun 2023)

Embedding Compression for Text Classification Using Dictionary Screening
Jing Zhou, Xinru Jing, Mu Liu, Hansheng Wang (23 Nov 2022)

Numerical Optimizations for Weighted Low-rank Estimation on Language Model
Ting Hua, Yen-Chang Hsu, Felicity Wang, Qiang Lou, Yilin Shen, Hongxia Jin (02 Nov 2022)

MorphTE: Injecting Morphology in Tensorized Embeddings
Guobing Gan, Peng Zhang, Sunzhu Li, Xiuqing Lu, Benyou Wang (27 Oct 2022)

Survey: Exploiting Data Redundancy for Optimization of Deep Learning
Jou-An Chen, Wei Niu, Bin Ren, Yanzhi Wang, Xipeng Shen (29 Aug 2022)

Language model compression with weighted low-rank factorization
Yen-Chang Hsu, Ting Hua, Sung-En Chang, Qiang Lou, Yilin Shen, Hongxia Jin (30 Jun 2022)

Word Tour: One-dimensional Word Embeddings via the Traveling Salesman Problem
Ryoma Sato (04 May 2022)

Doping: A technique for efficient compression of LSTM models using sparse structured additive matrices
Urmish Thakker, P. Whatmough, Zhi-Gang Liu, Matthew Mattina, Jesse G. Beu (14 Feb 2021)

Fast Exploration of Weight Sharing Opportunities for CNN Compression
Etienne Dupuis, D. Novo, Ian O'Connor, A. Bosio (02 Feb 2021)

Extreme Model Compression for On-device Natural Language Understanding
Kanthashree Mysore Sathyendra, Samridhi Choudhary, Leah Nicolich-Henkin (30 Nov 2020)

Weight Squeezing: Reparameterization for Knowledge Transfer and Model Compression
Artem Chumachenko, Daniil Gavrilov, Nikita Balagansky, Pavel Kalaidin (14 Oct 2020)

Deep Learning Meets Projective Clustering
Alaa Maalouf, Harry Lang, Daniela Rus, Dan Feldman (08 Oct 2020)

Rank and run-time aware compression of NLP Applications
Urmish Thakker, Jesse G. Beu, Dibakar Gope, Ganesh S. Dasika, Matthew Mattina (06 Oct 2020)

Compressed Deep Networks: Goodbye SVD, Hello Robust Low-Rank Approximation
M. Tukan, Alaa Maalouf, Matan Weksler, Dan Feldman (11 Sep 2020)

METEOR: Learning Memory and Time Efficient Representations from Multi-modal Data Streams
Amila Silva, S. Karunasekera, C. Leckie, Ling Luo (23 Jul 2020)

LadaBERT: Lightweight Adaptation of BERT through Hybrid Model Compression
Yihuan Mao, Yujing Wang, Chufan Wu, Chen Zhang, Yang-Feng Wang, Yaming Yang, Quanlu Zhang, Yunhai Tong, Jing Bai (08 Apr 2020)

Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies
Paul Pu Liang, Manzil Zaheer, Yuan Wang, Amr Ahmed (18 Mar 2020)

Embedding Compression with Isotropic Iterative Quantization
Siyu Liao, Jie Chen, Yanzhi Wang, Qinru Qiu, Bo Yuan (11 Jan 2020)

Deep Self-representative Concept Factorization Network for Representation Learning
Yan Zhang, Zhao Zhang, Zheng-Wei Zhang, Mingbo Zhao, Li Zhang, Zhengjun Zha, Meng Wang (13 Dec 2019)

DeFINE: DEep Factorized INput Token Embeddings for Neural Sequence Modeling
Sachin Mehta, Rik Koncel-Kedziorski, Mohammad Rastegari, Hannaneh Hajishirzi (27 Nov 2019)

Improving Word Embedding Factorization for Compression Using Distilled Nonlinear Neural Decomposition
Vasileios Lioutas, Ahmad Rashid, Krtin Kumar, Md. Akmal Haidar, Mehdi Rezagholizadeh (02 Oct 2019)

Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings
Gregor Wiedemann, Steffen Remus, Avi Chawla, Chris Biemann (23 Sep 2019)

Compression of Recurrent Neural Networks for Efficient Language Modeling
Artem M. Grachev, D. Ignatov, Andrey V. Savchenko (06 Feb 2019)

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen (10 Feb 2017)

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016)

Convolutional Neural Networks for Sentence Classification
Yoon Kim (25 Aug 2014)